Insight into Primal Augmented Lagrangian Multiplier Method

Authors

  • B. Premjith
  • S. Sachin Kumar
  • Akhil Manikkoth
  • T. V. Bijeesh
  • K. P. Soman
Abstract

We provide a simplified form of the Primal Augmented Lagrange Multiplier algorithm. We intend to fill the gaps in the steps involved in the mathematical derivation of the algorithm, so that an insight into the algorithm is gained. The experiment focuses on showing the reconstruction done using this algorithm.

Keywords: compressive sensing; $\ell_1$-minimization; sparsity; coherence

I. INTRODUCTION

Compressive Sensing (CS) is one of the hot topics in the area of signal processing [1, 2, 3]. The conventional way to sample signals follows Shannon's theorem, i.e., the sampling rate must be at least twice the maximum frequency present in the signal (the Nyquist rate) [4]. For practical signals, CS permits sampling or sensing below the Nyquist rate.

Consider the sensing matrix $A \in \mathbb{R}^{n \times N}$ ($N = 2n$) as the concatenation of two orthogonal matrices $\Psi$ and $\Phi$. A non-zero signal $b \in \mathbb{R}^n$ can be represented as a linear combination of the columns of $\Psi$, or as a linear combination of the columns of $\Phi$; that is, $b = \Psi\alpha$ and $b = \Phi\beta$, where $\alpha$ and $\beta$ are uniquely defined. Taking $\Psi$ as the identity matrix and $\Phi$ as the Fourier transform matrix, we can infer that $\alpha$ is the time-domain representation of $b$ and $\beta$ is the frequency-domain representation. Here we can see that a signal cannot be sparse in both domains at the same time. This phenomenon extends to any arbitrary pair of bases $\Psi$ and $\Phi$: either $\alpha$ can be sparse or $\beta$ can be sparse, but not both. If instead the two bases were the same, a vector $b$ constructed from a single column of $\Psi$ would attain the smallest possible cardinality (smallest possible number of non-zeros) in both $\alpha$ and $\beta$. The proximity between two bases is captured by the mutual coherence, defined through the maximal inner product between columns drawn from the two bases [5, 6].

CS theory says that a signal can be recovered from very few samples or measurements. This is made possible by two principles: sparsity and incoherence.

Sparsity: a signal is said to be sparse if it can be represented using a much smaller number of coefficients without loss of information. The advantage is that sparsity gives fast computation. CS exploits the fact that natural signals are sparse when expressed in a proper basis [5, 6].

Incoherence: the mutual coherence between the measurement basis $\Phi$ and the sparsity basis $\Psi$ is $\mu(\Phi, \Psi) = \sqrt{N}\,\max_{i,j} |\langle \varphi_i, \psi_j \rangle|$. For pairs of orthonormal bases, $\mu \geq 1$, with $\mu = 1$ attained by maximally incoherent pairs such as the spike and Fourier bases. The higher the incoherence (the smaller $\mu$), the smaller the number of measurements required [7].
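As a quick check of this definition, the following sketch computes the mutual coherence for the spike (identity) and DFT bases; the function name, dimension, and test cases are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(N) * max |<phi_i, psi_j>| over all column pairs."""
    N = Phi.shape[0]
    G = np.abs(Phi.conj().T @ Psi)  # all pairwise inner products
    return np.sqrt(N) * G.max()

N = 64
Psi = np.eye(N)                            # spike (time-domain) basis
Phi = np.fft.fft(np.eye(N)) / np.sqrt(N)   # orthonormal DFT basis
print(mutual_coherence(Phi, Psi))  # -> 1.0 (up to floating point): maximally incoherent pair
print(mutual_coherence(Psi, Psi))  # -> 8.0 = sqrt(N): identical bases, maximal coherence
```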
A. Underdetermined Linear System

Consider a matrix $A \in \mathbb{R}^{n \times N}$ with $n < N$, and define the underdetermined linear system of equations $Ax = b$, $x \in \mathbb{R}^N$, $b \in \mathbb{R}^n$. There are many ways of representing a given signal $b$: the system has more unknowns than equations, and thus it has either no solution, if $b$ is not in the span of the columns of $A$, or infinitely many solutions. In order to avoid the anomaly of having no solution, we shall hereafter assume that $A$ is a full-rank matrix, implying that its columns span the entire space $\mathbb{R}^n$. Consider an example of image reconstruction: let $b$ be an image of lower quality, from which we need to reconstruct the original image $x$ through a matrix $A$. Recovering $x$ given $A$ and $b$ constitutes the linear inversion problem. Compressive sensing is a method for finding the sparsest solution of such an underdetermined system of linear equations. From the compressive sensing point of view, $b$ is called the measurement vector, $A$ is called the sensing matrix or measurement matrix, and $x$ is the unknown signal. A conventional solution uses the linear least-squares method, which increases the computational time [8].

The above problem can instead be solved by optimization using $\ell_0$-minimization:

$$\min_x \|x\|_0 \quad \text{subject to } Ax = b.$$

The solution to this problem is an $x$ that has few non-zero entries; $\|x\|_0$ is known as the $\ell_0$ norm. The properties that a norm should satisfy are: (1) zero vector: $\|v\| = 0$ if and only if $v = 0$; (2) absolute homogeneity: for any scalar $t$, $\|tu\| = |t|\,\|u\|$; (3) triangle inequality: $\|u + v\| \leq \|u\| + \|v\|$. (The $\ell_0$ "norm", which merely counts non-zeros, violates absolute homogeneity, so it is not a true norm.) This minimization problem is NP-hard. Candès and Donoho proved that if the signal is sufficiently sparse, the solution can be obtained using $\ell_1$-minimization instead. The objective function becomes

$$\min_x \|x\|_1 \quad \text{subject to } Ax = b,$$

which is a convex optimization problem. This minimization seeks the vector $x$ whose absolute sum, the $\ell_1$ norm, is smallest among all feasible solutions. The problem can be recast as a linear program and solved using an interior-point method, but this becomes computationally complex.

Most real-world applications also need to be robust to noise; that is, the observation vector $b$ can be corrupted by noise. To handle the noise, a slight change is made to the constraint: an additional threshold parameter, predetermined from the noise level, is introduced:

$$\min_x \|x\|_1 \quad \text{subject to } \|b - Ax\|_2 \leq T,$$

where $T > 0$ is the threshold. For the reconstruction of such signals, the basis pursuit denoising (BPDN) algorithm is used [5, 9, 10].

In the search for efficient algorithms to solve these problems, many methods have been proposed: Orthogonal Matching Pursuit, Primal-Dual Interior-Point Method, Gradient Projection, Homotopy, Polytope Faces Pursuit, Iterative Thresholding, Proximal Gradient, Primal Augmented Lagrange Multiplier, and Dual Augmented Lagrange Multiplier [11]. These algorithms work better when the signal representation is sparser. Clearly there are infinitely many solutions, of which only a few give a good reconstruction; to single out one solution, regularization is used. What we finally try to achieve is a solution with the minimum norm.
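The recasting of the equality-constrained $\ell_1$ problem as a linear program, mentioned above, splits $x = u - v$ with $u, v \geq 0$, so that $\|x\|_1 = \mathbf{1}^T u + \mathbf{1}^T v$. The sketch below illustrates this with SciPy's general-purpose LP solver; the problem sizes and random test data are our own assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, N, k = 20, 50, 3                    # measurements, signal length, sparsity
A = rng.standard_normal((n, N))
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# min 1^T u + 1^T v   subject to   A(u - v) = b,  u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
# For sufficiently sparse x_true, l1-minimization recovers it (near) exactly.
print(np.linalg.norm(x_hat - x_true))
```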
II. PRIMAL AUGMENTED LAGRANGE MULTIPLIER METHOD

Our aim is to solve the system $Ax = b$. In practice, however, there may be some error, so that $Ax \approx b$. Written in equality form this becomes $Ax + r = b$, where $r$ is the residual error. Our main objective is to minimize over $x$ and $r$ such that $x$ is as sparse as possible, so we take the $\ell_1$ norm of $x$. The objective function is

$$\min_{x,r}\; \|x\|_1 + \frac{1}{2\mu}\|r\|_2^2 \quad \text{subject to } Ax + r = b, \tag{1}$$

where $x \in \mathbb{C}^n$, $r \in \mathbb{C}^m$, and $\mu > 0$ weights the residual term. Take the augmented Lagrangian of the form

$$L(x, r, y) = \|x\|_1 + \frac{1}{2\mu}\|r\|_2^2 - \mathrm{Re}\!\left(y^*(Ax + r - b)\right) + \frac{\beta}{2}\|Ax + r - b\|_2^2, \tag{2}$$

where $y \in \mathbb{C}^m$ is a multiplier and $\beta > 0$ is a penalty parameter. Since $y \in \mathbb{C}^m$, instead of $y^T$ we use $y^*$ (the conjugate transpose). Using an iterative method, from a given $(x^k, y^k)$ we obtain $(r^{k+1}, x^{k+1}, y^{k+1})$. We use the alternating minimization method to achieve this; i.e., with two of the variables held fixed, we minimize over the remaining one. First, fix $x = x^k$ and $y = y^k$ and find $r^{k+1}$ by minimizing $L(x, r, y)$ with respect to $r$.

Take the derivative of $L(x, r, y)$ with respect to $r$, omitting all terms independent of $r$ and treating $\|r\|_2^2$ as $r^* r$:

$$\frac{\partial L(x, r, y)}{\partial r} = \frac{\partial}{\partial r}\left[\frac{1}{2\mu}\,r^* r - \mathrm{Re}\!\left(y^*(Ax + r - b)\right) + \frac{\beta}{2}\|Ax + r - b\|_2^2\right] = \frac{1}{\mu}r - y + \beta(Ax + r - b) = 0. \tag{3}$$
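Setting the derivative in eq. (3) to zero and solving for $r$ gives the closed-form update $r^{k+1} = \mu\,(y^k + \beta(b - Ax^k))/(1 + \mu\beta)$. The excerpt available here ends before the $x$- and $y$-updates are derived, so the sketch below completes the iteration with choices that are standard for this formulation but are our assumptions rather than the paper's: a single proximal-gradient (soft-thresholding) step for $x$, the usual multiplier ascent step for $y$, and real-valued data for simplicity.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def palm_sketch(A, b, mu=1e-3, beta=1.0, iters=500):
    """Alternating minimization of the augmented Lagrangian (real case) for
    min ||x||_1 + (1/(2*mu))||r||_2^2  subject to  Ax + r = b."""
    m, n = A.shape
    x, r, y = np.zeros(n), np.zeros(m), np.zeros(m)
    # Step size for the proximal-gradient x-step: 1 / (beta * ||A||_2^2).
    tau = 1.0 / (beta * np.linalg.norm(A, 2) ** 2)
    for _ in range(iters):
        # r-update, closed form from eq. (3): r/mu - y + beta*(Ax + r - b) = 0.
        r = mu * (y + beta * (b - A @ x)) / (1.0 + mu * beta)
        # x-update: one proximal-gradient step on L(x, r, y) in x (assumed).
        grad = beta * A.T @ (A @ x + r - b - y / beta)
        x = soft(x - tau * grad, tau)
        # y-update: standard augmented-Lagrangian multiplier ascent (assumed).
        y = y - beta * (A @ x + r - b)
    return x
```

As $\mu \to 0$ the residual $r$ is forced toward zero and the iteration approaches the noiseless basis pursuit problem; a larger $\mu$ tolerates more noise in $b$.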


Journal title:
  • CoRR

Volume: abs/1312.7637

Publication date: 2012